Worst-Case Analysis of the Perceptron and Exponentiated Update Algorithms

Author

  • Tom Bylander

Abstract

The absolute loss is the absolute difference between the desired and predicted outcome. This paper demonstrates worst-case upper bounds on the absolute loss for the Perceptron learning algorithm and the Exponentiated Update learning algorithm, which is related to the Weighted Majority algorithm. The bounds characterize the behavior of the algorithms over any sequence of trials, where each trial consists of an example and a desired outcome interval (any value in the interval is an acceptable outcome). The worst-case absolute loss of both algorithms is bounded by: the absolute loss of the best linear function in a comparison class, plus a constant dependent on the initial weight vector, plus a per-trial loss. The per-trial loss can be eliminated if the learning algorithm is allowed a tolerance from the desired outcome. For concept learning, the worst-case bounds lead to mistake bounds that are comparable to past results.

∗This paper is a revised and extended version of Bylander [4].
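The two algorithms compared in the paper differ in how they adjust the weight vector after each trial: the Perceptron uses an additive update, while the Exponentiated Update algorithm uses a multiplicative update followed by renormalization. The sketch below illustrates this contrast for online linear prediction under the absolute loss. It is a minimal illustration, not the paper's exact formulation: the learning rate `eta`, the single desired value `y` (in place of an outcome interval), and the omission of any tolerance are simplifying assumptions.

```python
import math

def perceptron_update(w, x, y, y_hat, eta):
    """Additive (Perceptron-style) step: move each weight along
    eta * sign * x_i, where sign is the (sub)gradient direction
    of the absolute loss |y - y_hat| with respect to y_hat."""
    sign = 0 if y_hat == y else (1 if y_hat < y else -1)
    return [wi + eta * sign * xi for wi, xi in zip(w, x)]

def exponentiated_update(w, x, y, y_hat, eta):
    """Multiplicative (Exponentiated Update) step: scale each weight
    by exp(eta * sign * x_i), then renormalize so the weights sum
    to 1 (the weights stay in the probability simplex)."""
    sign = 0 if y_hat == y else (1 if y_hat < y else -1)
    w_new = [wi * math.exp(eta * sign * xi) for wi, xi in zip(w, x)]
    total = sum(w_new)
    return [wi / total for wi in w_new]

# One trial: predict with the current weights, observe the desired
# outcome, then update.
w = [0.25, 0.25, 0.25, 0.25]
x = [1.0, 0.0, 1.0, 0.0]
y = 1.0
y_hat = sum(wi * xi for wi, xi in zip(w, x))
w = exponentiated_update(w, x, y, y_hat, eta=0.1)
```

The normalization step in the multiplicative update is what ties the Exponentiated Update algorithm to Weighted Majority-style schemes: both maintain a distribution over components and reweight them exponentially according to how they contributed to the error.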


Similar articles

Worst-Case Absolute Loss Bounds for Linear Learning Algorithms

The absolute loss is the absolute difference between the desired and predicted outcome. I demonstrate worst-case upper bounds on the absolute loss for the perceptron algorithm and an exponentiated update algorithm related to the Weighted Majority algorithm. The bounds characterize the behavior of the algorithms over any sequence of trials, where each trial consists of an example and a desired o...

Full text

Learning Linear Functions with Quadratic and Linear Multiplicative Updates

We analyze variations of multiplicative updates for learning linear functions online. These can be described as substituting exponentiation in the Exponentiated Gradient (EG) algorithm with quadratic and linear functions. Both kinds of updates substitute exponentiation with simpler operations and reduce dependence on the parameter that specifies the sum of the weights during learning. In partic...

Full text

Modeling and analysis of leishmaniasis distribution process using multilayer perceptron neural network and support vector regression (Case study: villages of Isfahan province)

Villages in Isfahan province are among the areas prone to the spread of cutaneous leishmaniasis, which is characterized by the occurrence of wounds on the skin. To predict the future prevalence of cutaneous leishmaniasis, continuous monitoring of the spatial distribution of this disease is essential. Disease modeling was performed using two machine learning algorithms, support ve...

Full text

Exponentiated Gradient Versus Gradient Descent for Linear Predictors

We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gra...

Full text

Relative loss bounds for single neurons

We analyze and compare the well-known gradient descent algorithm and the more recent exponentiated gradient algorithm for training a single neuron with an arbitrary transfer function. Both algorithms are easily generalized to larger neural networks, and the generalization of gradient descent is the standard backpropagation algorithm. In this paper we prove worst-case loss bounds for both algori...

Full text


Journal:
  • Artif. Intell.

Volume 106, Issue 

Pages  -

Publication date: 1998